
    What do foreign neighbors say about the mental lexicon?

    A corpus analysis of phonological word-forms shows that English words have few phonological neighbors that are Spanish words. Concomitantly, Spanish words have few phonological neighbors that are English words. These observations appear to undermine certain accounts of bilingual language processing, and have significant implications for the processing and representation of word-forms in bilinguals. This research was supported in part by a grant from the National Institutes of Health to the University of Kansas through the Schiefelbusch Institute for Life Span Studies: National Institute on Deafness and Other Communication Disorders R01 DC 006472.

    A web-based interface to calculate phonological neighborhood density for words and nonwords in Modern Standard Arabic

    The availability of online databases (e.g., Balota et al., 2007) and calculators (e.g., Storkel & Hoover, 2010) has contributed to an increase in psycholinguistic research, to the development of evidence-based treatments in clinical settings, and to scientifically supported training programs in the language classroom. The benefit of online language resources is limited by the fact that the majority of such resources provide information only for the English language (Vitevitch, Chan & Goldstein, 2014). To address the lack of diversity in these resources for languages that differ phonologically and morphologically from English, the present article describes an online database to compute phonological neighborhood density (i.e., the number of words that sound similar to a given word) for words and nonwords in Modern Standard Arabic (MSA). A full description of how the calculator can be used is provided. It can be freely accessed at https://calculator.ku.edu/density/about
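
    As a rough illustration of the kind of computation such a calculator performs, the sketch below counts neighbors under the standard one-phoneme rule (two word-forms are neighbors if they differ by the substitution, addition, or deletion of a single phoneme). The toy transcriptions are hypothetical and do not reflect the conventions of the actual MSA database.

```python
# Minimal sketch of a neighborhood-density count under the one-phoneme rule.
# The toy "lexicon" below is hypothetical; a real calculator would use a full
# phonemically transcribed corpus and a language-appropriate phoneme inventory.

def one_phoneme_apart(a: list[str], b: list[str]) -> bool:
    """True if sequences a and b differ by one substitution, addition, or deletion."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):                               # single substitution
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = (a, b) if len(a) < len(b) else (b, a)
    return any(long_[:i] + long_[i + 1:] == short      # single addition/deletion
               for i in range(len(long_)))

def neighborhood_density(target: list[str], lexicon: list[list[str]]) -> int:
    """Number of words in the lexicon that are phonological neighbors of the target."""
    return sum(one_phoneme_apart(target, w) for w in lexicon if w != target)

# Hypothetical phoneme-level transcriptions, one symbol per phoneme.
lexicon = [list("kataba"), list("kasaba"), list("katab"), list("daraba")]
print(neighborhood_density(list("kataba"), lexicon))   # -> 2 (kasaba, katab)
```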

    The influence of clustering coefficient on word-learning: how groups of similar sounding words facilitate acquisition

    A grant from the One-University Open Access Fund at the University of Kansas was used to defray the author’s publication fees in this Open Access journal. The Open Access Fund, administered by librarians from the KU, KU Law, and KUMC libraries, is made possible by contributions from the offices of KU Provost, KU Vice Chancellor for Research & Graduate Studies, and KUMC Vice Chancellor for Research. For more information about the Open Access Fund, please see http://library.kumc.edu/authors-fund.xml. Clustering coefficient, C, measures the extent to which the neighbors of a word are also neighbors of each other, and has been shown to influence speech production, speech perception, and several memory-related processes. In this study we examined how C influences word-learning. Participants were trained over three sessions at 1-week intervals, and tested with a picture-naming task on nonword-nonobject pairs. We found an advantage for novel words with high C (the neighbors of such a novel word are likely to be neighbors of each other), but only after the 1-week retention period with no additional exposures to the stimuli. The results are consistent with the spreading-activation network model of the lexicon proposed by Chan and Vitevitch (2009). The influence of C on various language-related processes suggests that characteristics of the individual word are not the only things that influence processing; rather, lexical processing may also be influenced by the relationships that exist among words in the lexicon.
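
    As a sketch of how C is computed (the word list and the purely orthographic neighbor rule below are stand-ins for a real phonemically transcribed lexicon), the local clustering coefficient of a word is the number of edges among its neighbors divided by the number of possible edges among them:

```python
# Sketch: local clustering coefficient C of a word in a toy phonological network.
# Edges connect words differing by one segment; here an orthographic,
# substitution-only rule stands in for a proper phonemic comparison.
import networkx as nx

words = ["cat", "bat", "hat", "can", "cut", "bad"]

def one_segment_apart(a: str, b: str) -> bool:
    return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

G = nx.Graph()
G.add_nodes_from(words)
G.add_edges_from((a, b) for i, a in enumerate(words) for b in words[i + 1:]
                 if one_segment_apart(a, b))

# C = (edges among the word's neighbors) / (possible pairs of neighbors)
print(nx.clustering(G, "cat"))  # neighbors bat, hat, can, cut; only bat-hat connected -> 1/6
```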

    The Influence of Closeness Centrality on Lexical Processing

    The present study examined how the network science measure known as closeness centrality (which reflects how close a node is, on average, to all other nodes in the network) influences lexical processing. In the mental lexicon, a word such as CAN has high closeness centrality because it is close to many other words in the lexicon, whereas a word such as CURE has low closeness centrality because it is far from other words in the lexicon. In an auditory lexical decision task (Experiment 1), participants responded more quickly to words with high closeness centrality. In Experiment 2 an auditory lexical decision task was again used, but with a wider range of stimulus characteristics. Although there was no main effect of closeness centrality in Experiment 2, an interaction between closeness centrality and frequency of occurrence was observed on reaction times. The results are explained in terms of partial activation gradually strengthening, over time, word-forms that are centrally located in the phonological network.
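
    A minimal sketch of the measure itself, using networkx (the edge list is illustrative rather than derived from actual phonemic transcriptions): networkx defines closeness centrality as the reciprocal of the average shortest-path distance from a node to all other reachable nodes, so words that are closer to the rest of the network receive higher values.

```python
# Sketch: closeness centrality in a small toy lexical network.
# Edges are illustrative only; in a real phonological network they would link
# words that differ by a single phoneme.
import networkx as nx

G = nx.Graph([("can", "cat"), ("can", "man"), ("cat", "mat"), ("man", "mat"),
              ("cat", "cot"), ("cot", "core"), ("core", "cure"), ("core", "bore")])

for word, c in sorted(nx.closeness_centrality(G).items(), key=lambda kv: -kv[1]):
    print(f"{word:5s} {c:.3f}")
```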

    Identifying the phonological backbone in the mental lexicon

    Previous studies used techniques from network science to identify individual nodes and a set of nodes that were “important” in a network of phonological word-forms from English. In the present study we used a network simplification process—known as backbone extraction—that removed redundant edges to extract a subnetwork of “important” words from the network of phonological word-forms. The backbone procedure removed 68.5% of the edges in the original network to extract a backbone with a giant component containing 6,211 words. We compared psycholinguistic and network measures of the words in the backbone to the words that did not survive the backbone extraction procedure. Words in the backbone occurred more frequently in the language, were shorter in length, were similar to more phonological neighbors, and were closer to other words than words that did not survive the backbone extraction procedure. Words in the backbone of the phonological network might form a “kernel lexicon”—a small but essential set of words that allows one to communicate in a wide range of situations—and may provide guidance to clinicians and researchers on which words to focus on to facilitate typical development, or to accelerate rehabilitation efforts. The backbone extraction method may also prove useful in other applications of network science to the speech, language, hearing, and cognitive sciences.
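
    The exact simplification procedure used in the study is not reproduced here. As one well-known example of the general idea, the sketch below implements the disparity filter of Serrano, Boguñá, and Vespignani (2009), which keeps only edges whose weight is statistically surprising for at least one endpoint; it assumes weighted edges (e.g., a graded similarity score), so an unweighted phonological network would need either weights or a different backbone definition.

```python
# Sketch of one backbone-extraction method (the disparity filter), offered as an
# illustration of "removing redundant edges"; not necessarily the procedure used
# in the study summarized above.
import networkx as nx

def disparity_backbone(G: nx.Graph, alpha: float = 0.05) -> nx.Graph:
    """Keep an edge if its normalized weight is significant (p < alpha) for
    at least one of its endpoints under the disparity filter's null model."""
    backbone = nx.Graph()
    backbone.add_nodes_from(G.nodes(data=True))
    for u in G:
        k = G.degree(u)
        if k < 2:
            continue
        strength = sum(G[u][v].get("weight", 1.0) for v in G[u])
        for v in G[u]:
            p = G[u][v].get("weight", 1.0) / strength
            alpha_uv = (1.0 - p) ** (k - 1)   # p-value of this edge for node u
            if alpha_uv < alpha:
                backbone.add_edge(u, v, **G[u][v])
    return backbone
```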

    Investigating the Influence of Inverse Preferential Attachment on Network Development

    This work is licensed under a Creative Commons Attribution 4.0 International License. Recent work investigating the development of the phonological lexicon, where edges between words represent phonological similarity, has suggested that phonological network growth may be partly driven by a process that favors the acquisition of new words that are phonologically similar to several existing words in the lexicon. To explore this growth mechanism, we conducted a simulation study to examine the properties of networks grown by inverse preferential attachment, where new nodes added to the network tend to connect to existing nodes with fewer edges. Specifically, we analyzed the network structure and degree distributions of artificial networks generated via either preferential attachment, an inverse variant of preferential attachment, or combinations of both network growth mechanisms. The simulations showed that network growth initially driven by preferential attachment followed by inverse preferential attachment led to densely connected network structures (i.e., smaller diameters and average shortest path lengths), as well as degree distributions that could be characterized by non-power-law distributions, analogous to the features of real-world phonological networks. These results provide converging evidence that inverse preferential attachment may play a role in the development of the phonological lexicon and reflect processing costs associated with a mature lexicon structure.
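
    A minimal sketch of the inverse-preferential-attachment growth rule, where a new node attaches to existing nodes with probability that decreases with their current degree. The seed network size, m, and the 1/(degree + 1) weighting are illustrative choices, not the parameters of the reported simulations.

```python
# Sketch: growing a network by inverse preferential attachment. Each new node
# attaches to m distinct existing nodes, sampled with probability proportional
# to 1 / (degree + 1), so low-degree nodes are favored.
import random
import networkx as nx

def grow_inverse_pa(n_nodes: int = 500, m: int = 2, seed: int = 1) -> nx.Graph:
    rng = random.Random(seed)
    G = nx.complete_graph(m + 1)             # small fully connected seed network
    for new in range(m + 1, n_nodes):
        existing = list(G.nodes)
        weights = [1.0 / (G.degree(v) + 1) for v in existing]
        targets = set()
        while len(targets) < m:               # sample m distinct attachment targets
            targets.add(rng.choices(existing, weights=weights, k=1)[0])
        G.add_node(new)
        G.add_edges_from((new, t) for t in targets)
    return G

G = grow_inverse_pa()
print(nx.number_of_nodes(G), nx.number_of_edges(G))
print("average shortest path length:", nx.average_shortest_path_length(G))
```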

    Examining the acquisition of phonological word-forms with computational experiments

    This is the author's accepted manuscript. The original publication is available at http://las.sagepub.com/content/early/2012/10/21/0023830912460513.full.pdf. It has been hypothesized that known words in the lexicon strengthen newly formed representations of novel words, resulting in words with dense neighborhoods being learned more quickly than words with sparse neighborhoods. Tests of this hypothesis in a connectionist network showed that words with dense neighborhoods were learned better than words with sparse neighborhoods when the network was exposed to the words all at once (Experiment 1), or gradually over time, like human word-learners (Experiment 2). This pattern was also observed despite variation in the availability of processing resources in the networks (Experiment 3). A learning advantage for words with sparse neighborhoods was observed only when the network was initially exposed to words with sparse neighborhoods and exposed to dense neighborhoods later in training (Experiment 4). The benefits of computational experiments for increasing our understanding of language processes and for the treatment of language processing disorders are discussed.
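
    The architecture used in the original computational experiments is not reproduced here. Purely to illustrate the general setup, the sketch below trains a toy delta-rule autoassociator on random binary "word-form" vectors and then probes novel patterns that overlap either heavily (dense-like) or minimally (sparse-like) with the trained lexicon; every representation and parameter is hypothetical.

```python
# Toy simulation setup (not the network from the paper): a delta-rule
# autoassociator learns a small lexicon of binary word-form vectors, then is
# probed with novel patterns that share many or few features with known words.
import numpy as np

rng = np.random.default_rng(0)
n_features = 50           # length of each binary word-form vector
n_known = 40              # size of the trained "lexicon"

# Known word-forms as random sparse binary vectors.
lexicon = (rng.random((n_known, n_features)) < 0.2).astype(float)

# Delta-rule autoassociator: a single weight matrix trained to reproduce its input.
W = np.zeros((n_features, n_features))
lr = 0.05
for epoch in range(200):
    for x in lexicon:
        W += lr * np.outer(x - W @ x, x)

def make_novel(base_overlap: float) -> np.ndarray:
    """Create a novel word sharing roughly `base_overlap` of its features with a known word."""
    base = lexicon[rng.integers(n_known)]
    novel = base.copy()
    flip = rng.random(n_features) > base_overlap            # features not inherited
    novel[flip] = (rng.random(flip.sum()) < 0.2).astype(float)
    return novel

def reconstruction_quality(x: np.ndarray) -> float:
    """Cosine similarity between a pattern and its reconstruction through W."""
    y = W @ x
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))

dense_like  = [reconstruction_quality(make_novel(0.8)) for _ in range(100)]
sparse_like = [reconstruction_quality(make_novel(0.2)) for _ in range(100)]
print("dense-like mean:", np.mean(dense_like), "sparse-like mean:", np.mean(sparse_like))
```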

    Phonological similarity influences word learning in adults learning Spanish as a foreign language

    Neighborhood density—the number of words that sound similar to a given word (Luce & Pisoni, 1998)—influences word-learning in native English-speaking children and adults (Storkel, 2004; Storkel, Armbruster, & Hogan, 2006): novel words with many similar-sounding English words (i.e., dense neighborhood) are learned more quickly than novel words with few similar-sounding English words (i.e., sparse neighborhood). The present study examined how neighborhood density influences word-learning in native English-speaking adults learning Spanish as a foreign language. Students in their third semester of Spanish language classes learned advanced Spanish words that sounded similar to many known Spanish words (i.e., dense neighborhood) or sounded similar to few known Spanish words (i.e., sparse neighborhood). In three word-learning tasks, performance was better for Spanish words with dense rather than sparse neighborhoods. These results suggest that a similar mechanism may be used to learn new words in a native and a foreign language.

    The origins of Zipf's meaning-frequency law

    In his pioneering research, G.K. Zipf observed that more frequent words tend to have more meanings, and showed that the number of meanings of a word grows as the square root of its frequency. He derived this relationship from two assumptions: that words follow Zipf's law for word frequencies (a power law dependency between frequency and rank) and Zipf's law of meaning distribution (a power law dependency between number of meanings and rank). Here we show that a single assumption on the joint probability of a word and a meaning suffices to infer Zipf's meaning-frequency law or relaxed versions. Interestingly, this assumption can be justified as the outcome of a biased random walk in the process of mental exploration.
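
    The elimination step behind the original derivation can be written out explicitly. The exponent values used below (alpha of about 1 for the frequency law, gamma of about 1/2 for the law of meaning distribution) are the classic ones and serve only to show how the square-root relationship follows.

```latex
% Zipf's two empirical laws, written as power laws in the frequency rank r:
\[
  f(r) \propto r^{-\alpha}, \qquad \mu(r) \propto r^{-\gamma}.
\]
% Eliminating the rank gives the meaning-frequency law:
\[
  r \propto f^{-1/\alpha}
  \quad\Longrightarrow\quad
  \mu \propto f^{\gamma/\alpha}.
\]
% With the classic exponents \alpha \approx 1 and \gamma \approx 1/2,
% the number of meanings grows as the square root of frequency:
\[
  \mu \propto \sqrt{f}.
\]
```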

    The Influence of the Phonological Neighborhood Clustering-Coefficient on Spoken Word Recognition

    This article may not exactly replicate the final version published in the APA journal. It is not the copy of record. Clustering coefficient—a measure derived from the new science of networks—refers to the proportion of phonological neighbors of a target word that are also neighbors of each other. Consider the words bat, hat, and can, all of which are neighbors of the word cat; the words bat and hat are also neighbors of each other. In a perceptual identification task, words with a low clustering coefficient (i.e., few neighbors are neighbors of each other) were more accurately identified than words with a high clustering coefficient (i.e., many neighbors are neighbors of each other). In a lexical decision task, words with a low clustering coefficient were responded to more quickly than words with a high clustering coefficient. These findings suggest that the structure of the lexicon, that is, the similarity relationships among the neighbors of the target word as measured by the clustering coefficient, influences lexical access in spoken word recognition. Simulations of the TRACE and Shortlist models of spoken word recognition failed to account for the present findings. A framework for a new model of spoken word recognition is proposed.
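
    Using only the words mentioned above (the full neighborhood of cat is of course much larger), the clustering coefficient of cat works out as follows: cat has three listed neighbors (bat, hat, can), and of the three possible pairs of neighbors, only bat and hat are themselves neighbors.

```latex
\[
  C(\text{cat})
  = \frac{\text{edges among cat's neighbors}}{\binom{k}{2}}
  = \frac{1}{\binom{3}{2}}
  = \frac{1}{3}.
\]
```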